
    Change point analysis of second order characteristics in non-stationary time series

    An important assumption in much of the work on testing for structural breaks in time series is that the model is formulated such that the stochastic process under the null hypothesis of "no change point" is stationary. This assumption is crucial for deriving (asymptotic) critical values for the corresponding testing procedures using an elegant and powerful mathematical theory, but it may not be very realistic from a practical point of view. This paper develops change point analysis under less restrictive assumptions and deals with the problem of detecting change points in the marginal variance and correlation structures of a non-stationary time series. A CUSUM approach is proposed to test the "classical" hypothesis of the form $H_0: \theta_1 = \theta_2$ vs. $H_1: \theta_1 \neq \theta_2$, where $\theta_1$ and $\theta_2$ denote second order parameters of the process before and after a change point. The asymptotic distribution of the CUSUM test statistic is derived under the null hypothesis. This distribution depends in a complicated way on the dependency structure of the nonlinear non-stationary time series, and a bootstrap approach is developed to generate critical values. The results are then extended to test the hypothesis of a non-relevant change point, i.e. $H_0: |\theta_1 - \theta_2| \leq \delta$, which reflects the fact that inference should not change if the difference between the parameters before and after the change point is small. In contrast to previous work, our approach requires neither a constant mean nor, in the case of testing for lag-$k$ correlation, that the mean, variance and fourth order joint cumulants are constant under the null hypothesis. In particular, we allow the variance to have a change point at a different location than the auto-covariance.
    Comment: 64 pages, 5 figures
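    As a rough illustration of the kind of statistic involved (not the paper's procedure, which is tailored to dependent, non-stationary data), the Python sketch below computes a CUSUM statistic for a change in the marginal variance and calibrates it with a crude i.i.d. bootstrap; the function names and toy data are hypothetical.

        import numpy as np

        def cusum_variance_stat(x):
            # CUSUM statistic built from squared deviations (a proxy for the marginal variance)
            n = len(x)
            d = (x - x.mean()) ** 2
            cum = np.cumsum(d)
            k = np.arange(1, n)
            stat = np.abs(cum[:-1] - (k / n) * cum[-1]) / np.sqrt(n)
            return stat.max()

        def bootstrap_critical_value(x, level=0.05, n_boot=999, seed=None):
            # Crude i.i.d. bootstrap; the paper develops a bootstrap suited to
            # dependent, non-stationary data, which is not reproduced here.
            rng = np.random.default_rng(seed)
            n = len(x)
            stats = [cusum_variance_stat(rng.choice(x, size=n, replace=True))
                     for _ in range(n_boot)]
            return np.quantile(stats, 1 - level)

        # Toy example: the variance doubles halfway through the series.
        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0, 1, 200), rng.normal(0, 2, 200)])
        t, c = cusum_variance_stat(x), bootstrap_critical_value(x, seed=1)
        print(f"CUSUM = {t:.3f}, bootstrap 5% critical value = {c:.3f}, reject = {t > c}")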

    Inference of synchrosqueezing transform -- toward a unified statistical analysis of nonlinear-type time-frequency analysis

    We provide a statistical analysis of a tool in nonlinear-type time-frequency analysis, the synchrosqueezing transform (SST), for both the null and non-null cases. The intricate nonlinear interaction of different quantities in the SST is quantified by carefully analyzing relevant multivariate complex Gaussian random variables. Several new results for such random variables are provided, and a central limit theorem for the SST is established. The analysis sheds light on bridging time-frequency analysis to time series analysis and diffusion geometry.
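    For orientation, a minimal and purely illustrative STFT-based synchrosqueezing sketch is given below: it computes a Gaussian-window STFT, estimates the instantaneous frequency from the phase derivative in time, and reassigns energy along the frequency axis. The window length, bandwidth and chirp signal are arbitrary choices, and none of the paper's statistical analysis is reflected here.

        import numpy as np

        def sst_stft(x, fs, win_len=257, sigma=32.0, hop=1, eps=1e-10):
            # Gaussian-window STFT of the raw samples (phase carries absolute time)
            n, half = len(x), win_len // 2
            g = np.exp(-0.5 * ((np.arange(win_len) - half) / sigma) ** 2)
            freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
            times = np.arange(half, n - half, hop)
            V = np.array([np.fft.rfft(x[t - half:t + half + 1] * g) for t in times]).T
            # Instantaneous-frequency estimate in Hz: (1 / 2*pi) * d/dt arg V
            dV = np.gradient(V, hop / fs, axis=1)
            omega = np.imag(dV / (V + eps)) / (2 * np.pi)
            # Synchrosqueezing: reassign |V| to the frequency bin closest to omega
            T = np.zeros(V.shape)
            df = freqs[1] - freqs[0]
            for j in range(V.shape[1]):
                k = np.clip(np.rint(omega[:, j] / df).astype(int), 0, len(freqs) - 1)
                np.add.at(T[:, j], k, np.abs(V[:, j]))
            return freqs, times / fs, np.abs(V), T

        # Toy chirp with instantaneous frequency 50 + 40 t Hz: T concentrates around it.
        fs = 1000
        t = np.arange(0, 2, 1 / fs)
        x = np.cos(2 * np.pi * (50 * t + 20 * t ** 2))
        freqs, times, S, T = sst_stft(x, fs, hop=4)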

    Ranking and Selection under Input Uncertainty: Fixed Confidence and Fixed Budget

    In stochastic simulation, input uncertainty (IU) is caused by the error in estimating the input distributions from finite real-world data. When it comes to simulation-based Ranking and Selection (R&S), ignoring IU could lead to the failure of many existing selection procedures. In this paper, we study R&S under IU by allowing the possibility of acquiring additional data. Two classical R&S formulations are extended to account for IU: (i) for fixed confidence, we consider data that arrive sequentially so that IU can be reduced over time; (ii) for fixed budget, a joint budget is assumed to be available for both collecting input data and running simulations. New procedures are proposed for each formulation using the frameworks of Sequential Elimination and Optimal Computing Budget Allocation, with theoretical guarantees provided accordingly (e.g., an upper bound on the expected running time and a finite-sample bound on the probability of false selection). Numerical results demonstrate the effectiveness of our procedures through a multi-stage production-inventory problem.
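    By way of background, the classical OCBA allocation (with no input-uncertainty term, so not one of the procedures proposed here) splits a fixed simulation budget among designs roughly as in the sketch below; the example designs and numbers are made up.

        import numpy as np

        def ocba_allocation(means, stds, budget, minimize=True):
            # Classical OCBA split: non-best designs get N_i proportional to
            # (sigma_i / delta_i)^2; the best gets sigma_b * sqrt(sum (N_i / sigma_i)^2).
            means = np.asarray(means, dtype=float)
            stds = np.asarray(stds, dtype=float)
            k = len(means)
            b = int(np.argmin(means) if minimize else np.argmax(means))
            others = np.arange(k) != b
            delta = means - means[b]
            ratio = np.zeros(k)
            ratio[others] = (stds[others] / delta[others]) ** 2
            ratio[b] = stds[b] * np.sqrt(np.sum((ratio[others] / stds[others]) ** 2))
            return np.rint(budget * ratio / ratio.sum()).astype(int)

        # Made-up example: five designs, smaller mean is better, 1000 extra replications.
        print(ocba_allocation(means=[1.0, 1.2, 1.5, 2.0, 2.1],
                              stds=[0.8, 1.0, 0.9, 1.2, 0.7],
                              budget=1000))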

    A Generic Path Algorithm for Regularized Statistical Estimation

    Regularization is widely used in statistics and machine learning to prevent overfitting and to steer solutions toward prior information. In general, a regularized estimation problem minimizes the sum of a loss function and a penalty term. The penalty term is usually weighted by a tuning parameter and encourages certain constraints on the parameters to be estimated. Particular choices of constraints lead to the popular lasso, fused lasso, and other generalized $l_1$ penalized regression methods. Although there has been a lot of research in this area, developing efficient optimization methods for many nonseparable penalties remains a challenge. In this article we propose an exact path solver based on ordinary differential equations (EPSODE) that works for any convex loss function and can deal with generalized $l_1$ penalties as well as more complicated regularization such as the inequality constraints encountered in shape-restricted regressions and nonparametric density estimation. In the path following process, the solution path hits, exits, and slides along the various constraints and vividly illustrates the tradeoffs between goodness of fit and model parsimony. In practice, EPSODE can be coupled with AIC, BIC, $C_p$ or cross-validation to select an optimal tuning parameter. Our applications to generalized $l_1$ regularized generalized linear models, shape-restricted regressions, Gaussian graphical models, and nonparametric density estimation showcase the potential of the EPSODE algorithm.
    Comment: 28 pages, 5 figures
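    EPSODE itself follows the path of a generalized $l_1$ or constrained problem by tracking when constraints become active or inactive; the toy Python sketch below only illustrates the underlying "solution path as an ODE" idea on a ridge-regression path, where the derivative of the path is available in closed form. It is not the paper's algorithm, and the data and grid are arbitrary.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Ridge path: beta(rho) = (X'X + rho I)^{-1} X'y satisfies
        #   d beta / d rho = -(X'X + rho I)^{-1} beta(rho),
        # so the entire path can be traced with an ODE solver starting from OLS.
        rng = np.random.default_rng(1)
        n, p = 100, 5
        X = rng.normal(size=(n, p))
        y = X @ np.array([3.0, -2.0, 0.0, 0.0, 1.0]) + rng.normal(size=n)
        XtX, Xty = X.T @ X, X.T @ y

        def dbeta_drho(rho, beta):
            return -np.linalg.solve(XtX + rho * np.eye(p), beta)

        beta0 = np.linalg.solve(XtX, Xty)          # start of the path at rho = 0 (OLS)
        rho_grid = np.linspace(0.0, 500.0, 200)
        sol = solve_ivp(dbeta_drho, (0.0, 500.0), beta0, t_eval=rho_grid, rtol=1e-8)
        # sol.y[:, j] is the ridge solution at rho_grid[j]; coefficients shrink toward 0.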